
    ST-CapsNet: Linking Spatial and Temporal Attention with Capsule Network for P300 Detection Improvement

    A brain-computer interface (BCI), which provides an advanced, direct form of human-machine interaction, has gained substantial research interest in the last decade for its great potential in various applications, including rehabilitation and communication. Among them, the P300-based BCI speller is a typical application capable of identifying the expected stimulated characters. However, the applicability of the P300 speller is hampered by its low recognition rate, which is partially attributable to the complex spatio-temporal characteristics of EEG signals. Here, we developed a deep-learning analysis framework named ST-CapsNet to achieve better P300 detection using a capsule network with both spatial and temporal attention modules. Specifically, we first employed spatial and temporal attention modules to obtain refined EEG signals by capturing event-related information. The obtained signals were then fed into the capsule network for discriminative feature extraction and P300 detection. To quantitatively assess the performance of the proposed ST-CapsNet, two publicly available datasets were used (i.e., Dataset IIb of BCI Competition 2003 and Dataset II of BCI Competition III). A new metric, averaged symbols under repetitions (ASUR), was adopted to evaluate the cumulative effect of symbol recognition under different repetitions. In comparison with several widely used methods (i.e., LDA, ERP-CapsNet, CNN, MCNN, SWFP, and MsCNN-TL-ESVM), the proposed ST-CapsNet framework significantly outperformed the state-of-the-art methods in terms of ASUR. More interestingly, the absolute values of the spatial filters learned by ST-CapsNet are higher in the parietal lobe and occipital region, which is consistent with the generation mechanism of the P300.
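    The attention-based refinement step described above can be sketched roughly as re-weighting an epoch with per-channel (spatial) and per-sample (temporal) scores. The function names and fixed score vectors below are illustrative assumptions; in ST-CapsNet the scores come from learned modules whose architecture is not given in the abstract.

```python
import numpy as np

def softmax(z):
    """Normalize raw scores into attention weights that sum to 1."""
    e = np.exp(z - z.max())
    return e / e.sum()

def refine_epoch(eeg, spatial_scores, temporal_scores):
    """Scale an EEG epoch (channels x samples) by spatial attention
    (one weight per channel) and temporal attention (one weight per
    sample). Hypothetical stand-in for the learned attention modules."""
    a_s = softmax(spatial_scores)[:, None]   # channel weights, column vector
    a_t = softmax(temporal_scores)[None, :]  # sample weights, row vector
    return eeg * a_s * a_t

rng = np.random.default_rng(0)
epoch = rng.standard_normal((8, 100))        # 8 channels, 100 samples
refined = refine_epoch(epoch,
                       rng.standard_normal(8),
                       rng.standard_normal(100))
print(refined.shape)                          # (8, 100): shape is preserved
```

The refined epoch keeps the original dimensions, so it can be passed unchanged to whatever classifier follows.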

    Diverse Feature Blend Based on Filter-Bank Common Spatial Pattern and Brain Functional Connectivity for Multiple Motor Imagery Detection

    Motor imagery (MI) based brain-computer interfaces (BCIs) are a research hotspot and have attracted considerable attention. Within this research topic, multiple-MI classification is a challenge due to the difficulties caused by time-varying spatial features across different individuals. To deal with this challenge, we fused brain functional connectivity (BFC) and one-versus-the-rest filter-bank common spatial pattern (OVR-FBCSP) features to improve the robustness of classification. The BFC features were extracted by the phase locking value (PLV), representing the brain inter-regional interactions relevant to MI, whilst OVR-FBCSP was used to extract the spatial-frequency features related to MI. These diverse features were then fed into a multi-kernel relevance vector machine (MK-RVM). A dataset with three motor imagery tasks (left-hand MI, right-hand MI, and feet MI) was used to assess the proposed method. Experimental results not only showed that the cascade structure of diverse feature fusion and MK-RVM achieved satisfactory classification performance (average accuracy: 83.81%, average kappa: 0.76), but also demonstrated that BFC plays a supplementary role in MI classification. Moreover, the proposed method has the potential to be integrated into online multiple-MI detection owing to the strong time-efficiency of the RVM.
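    The PLV feature mentioned above has a standard closed form: the magnitude of the mean complex phase difference between two signals, computed from their analytic (Hilbert-transformed) representations. The sketch below uses synthetic narrowband test signals as an assumption; it is not the paper's code.

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase locking value |mean(exp(i*(phi_x - phi_y)))|, in [0, 1].
    Instantaneous phases come from the analytic signal via the
    Hilbert transform."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

t = np.linspace(0, 1, 250, endpoint=False)   # 1 s sampled at 250 Hz
x = np.sin(2 * np.pi * 10 * t)               # 10 Hz oscillation
y = np.sin(2 * np.pi * 10 * t + 0.5)         # same rhythm, constant phase lag
print(round(plv(x, y), 2))                    # ~1.0: phases are locked
```

A constant phase lag still yields PLV near 1, which is the point of the measure: it indexes phase consistency, not zero-lag similarity.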

    An Inverse-Free and Scalable Sparse Bayesian Extreme Learning Machine for Classification Problems

    Sparse Bayesian Extreme Learning Machine (SBELM) constructs an extremely sparse and probabilistic model with low computational cost and high generalization. However, the update rule for the hyperparameters (ARD prior) in SBELM uses the diagonal elements of the inverse of the covariance matrix over the full training dataset, which raises two issues. First, inverting the Hessian matrix may suffer from ill-conditioning in some cases, which prevents SBELM from converging. Second, it may cause memory overflow, requiring O(L^3) computational memory (L: number of hidden nodes) to invert the large covariance matrix when updating the ARD priors. To address these issues, an inverse-free SBELM called QN-SBELM is proposed in this paper, which integrates the gradient-based Quasi-Newton (QN) method into SBELM to approximate the inverse covariance matrix. It takes O(L^2) computational complexity and is simultaneously scalable to large problems. QN-SBELM was evaluated on benchmark datasets of different sizes. Experimental results verify that QN-SBELM achieves more accurate results than SBELM with a sparser model, provides more stable solutions, and extends well to large-scale problems.
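    The core "inverse-free" idea, building an inverse-matrix approximation from gradient information instead of explicit inversion, can be illustrated with a generic BFGS quasi-Newton update on a toy quadratic. This is only a sketch of the principle; QN-SBELM's actual update targets the SBELM covariance matrix and its details are not given in the abstract.

```python
import numpy as np

# Toy quadratic f(x) = 0.5 x^T A x - b^T x, so the Hessian is A and the
# gradient is g(x) = A x - b. The inverse-Hessian approximation H is
# built from rank-two BFGS updates -- np.linalg.inv is never called
# inside the loop, which is the inverse-free idea.
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, 1.0])

x = np.zeros(2)
H = np.eye(2)                            # initial inverse-Hessian guess
I = np.eye(2)
for _ in range(2):                       # n steps suffice for an n-dim quadratic
    g = A @ x - b
    d = -H @ g                           # quasi-Newton search direction
    alpha = -(g @ d) / (d @ A @ d)       # exact line search for a quadratic
    s = alpha * d                        # step taken
    x = x + s
    y = (A @ x - b) - g                  # change in gradient
    rho = 1.0 / (y @ s)
    H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
        + rho * np.outer(s, s)           # BFGS inverse-Hessian update

print(np.allclose(H, np.linalg.inv(A)))  # True: H has converged to A^{-1}
print(np.allclose(A @ x, b))             # True: x solves the linear system
```

Each update costs O(L^2) in matrix-vector products and storage for an L-dimensional problem, versus O(L^3) for a direct inversion, which matches the complexity reduction claimed in the abstract.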

    Self-Attentive Channel-Connectivity Capsule Network for EEG-Based Driving Fatigue Detection

    Deep neural networks have recently been successfully extended to EEG-based driving fatigue detection. Nevertheless, most existing models fail to reveal the intrinsic inter-channel relations that are known to be beneficial for EEG-based classification. Additionally, these models require substantial data for training, which is often impractical due to the high cost of data collection. To simultaneously address these two issues, we propose a Self-Attentive Channel-Connectivity Capsule Network (SACC-CapsNet) for EEG-based driving fatigue detection in this paper. SACC-CapsNet starts with a temporal-channel attention module that investigates the critical temporal information and important channels for driving fatigue detection, refining the input EEG signals. Subsequently, the refined EEG data are transformed into a channel covariance matrix to capture inter-channel relations, followed by selective kernel attention to extract highly discriminative channel-connectivity features. Finally, a capsule neural network is employed to effectively learn the relationships between connectivity features, which is better suited to limited data. To confirm the effectiveness of SACC-CapsNet, we collected 24-channel EEG data from 31 subjects (mean age = 23.13 ± 2.68 years, male/female = 18/13) in a simulated fatigue-driving environment. Extensive experiments were conducted with the acquired data, and the comparison results show that our proposed model outperforms state-of-the-art methods. Additionally, the channel covariance matrix learned by SACC-CapsNet reveals that the frontal pole is most informative for detecting driving fatigue, followed by the parietal and central regions. Intriguingly, the temporal-channel attention module can enhance the significance of these critical regions, and the reconstructed channel covariance matrix generated by the decoder network of SACC-CapsNet effectively preserves valuable information about them.
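    The channel covariance matrix at the heart of the pipeline above is a standard construction: for an epoch of shape channels x samples, it is the covariance between every pair of channels. The function name and normalization below are illustrative assumptions, not taken from the SACC-CapsNet paper.

```python
import numpy as np

def channel_covariance(eeg):
    """Channel-by-channel covariance of one EEG epoch
    (channels x samples), a common way to encode inter-channel
    relations. Uses the unbiased (N-1) normalization."""
    centered = eeg - eeg.mean(axis=1, keepdims=True)  # remove per-channel mean
    return centered @ centered.T / (eeg.shape[1] - 1)

rng = np.random.default_rng(0)
epoch = rng.standard_normal((24, 500))   # 24 channels, 500 samples, as in the study
C = channel_covariance(epoch)
print(C.shape)                            # (24, 24): one entry per channel pair
print(np.allclose(C, C.T))                # True: covariance is symmetric
```

The resulting 24 x 24 symmetric matrix is a compact, image-like representation of inter-channel structure, which is what makes it a convenient input for the downstream attention and capsule layers the abstract describes.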